
    What Am I Testing and Where? Comparing Testing Procedures based on Lightweight Requirements Annotations

    [Context] The testing of software-intensive systems is performed in different test stages, each comprising a large number of test cases. These test cases are commonly derived from requirements. Each test stage exhibits specific demands and constraints with respect to its degree of detail and what can be tested. Therefore, specific test suites are defined for each test stage. In this paper, the focus is on the domain of embedded systems, where typical test stages include, among others, Software-in-the-Loop and Hardware-in-the-Loop. [Objective] Monitoring and controlling which requirements are verified in which detail and in which test stage is a challenge for engineers. However, this information is necessary to assure a certain test coverage, to minimize redundant testing procedures, and to avoid inconsistencies between test stages. In addition, engineers are reluctant to state their requirements in structured languages or models that would facilitate relating requirements to test executions. [Method] With our approach, we close the gap between requirements specifications and test executions. Previously, we proposed a lightweight markup language for requirements which provides a set of annotations that can be applied to natural language requirements. The annotations are mapped to events and signals in test executions. As a result, meaningful insights from a set of test executions can be directly related to artifacts in the requirements specification. In this paper, we use the markup language to compare different test stages with one another. [Results] We annotate 443 natural language requirements of a driver assistance system by means of our lightweight markup language. The annotations are then linked to 1300 test executions from a simulation environment and 53 test executions from test drives with human drivers. Based on the annotations, we are able to analyze how similar the test stages are and how well test stages and test cases are aligned with the requirements. Further, we highlight the general applicability of our approach through this extensive experimental evaluation. [Conclusion] With our approach, the results of several test levels are linked to the requirements and enable the evaluation of complex test executions. By this means, practitioners can easily evaluate how well a system performs with regard to its specification and, additionally, can reason about the expressiveness of the applied test stage.
    TU Berlin, Open-Access-Mittel - 202
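
    The abstract does not reproduce the annotation syntax or the mapping itself; the sketch below merely illustrates the underlying idea under assumed names and values: an annotated text span of a requirement is mapped to a signal recorded during a test execution, and a predicate over that signal yields a verdict that can be reported back at the requirement artifact.

    ```python
    # Hypothetical sketch (not the authors' markup language): relate annotated
    # requirement spans to signals from one recorded test execution. The spans,
    # signal names, and checks are illustrative assumptions.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Annotation:
        span: str                      # annotated text in the requirement
        signal: str                    # execution signal the span is mapped to
        check: Callable[[list], bool]  # predicate over the recorded values

    requirement = "If an obstacle is detected, the vehicle shall brake within 0.5 s."
    annotations = [
        Annotation("obstacle is detected", "obstacle_detected", lambda v: any(v)),
        Annotation("brake within 0.5 s", "brake_reaction_time_s", lambda v: max(v) <= 0.5),
    ]

    # One test execution (e.g., a simulation run or a test drive), keyed by signal name.
    execution = {
        "obstacle_detected": [False, False, True, True],
        "brake_reaction_time_s": [0.42],
    }

    # Evaluate each annotation and report the verdict per annotated span.
    for ann in annotations:
        verdict = "passed" if ann.check(execution[ann.signal]) else "failed"
        print(f"'{ann.span}' -> {ann.signal}: {verdict}")
    ```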

    Requirements-based Simulation Execution for Virtual Validation of Autonomous Systems

    The complexity of software is rapidly increasing in many domains. Therefore, simulations have become established as a testing tool in recent years. The virtual validation of autonomous systems in particular leads to increasingly complex simulation environments. Nevertheless, the scenarios and the simulation results are not linked to the requirements. To close this gap, we develop a lightweight approach that allows the user to extract functional information from the requirements. Simulation results can then be presented at different levels of detail within the original requirements. This replaces difficult translations of requirements and allows permanent comparison across all test levels.
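
    As a purely illustrative complement (not the authors' tooling), the following sketch aggregates per-signal results of a simulation run into per-requirement verdicts, so the same data can be reported at a coarse or a fine level of detail; requirement IDs, signal names, and values are assumptions.

    ```python
    # Hypothetical sketch: present simulation results at two levels of detail,
    # per annotated signal and aggregated per requirement. All data is illustrative.
    results = {
        "REQ-12": {"obstacle_detected": True, "brake_reaction_time_ok": True},
        "REQ-13": {"lane_keeping_active": True, "max_lateral_offset_ok": False},
    }

    for req_id, checks in results.items():
        verdict = "passed" if all(checks.values()) else "failed"
        print(f"{req_id}: {verdict}")                           # coarse level
        for signal, ok in checks.items():
            print(f"  {signal}: {'ok' if ok else 'violated'}")  # detailed level
    ```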

    Extraction of System States from Natural Language Requirements

    In recent years, simulations have proven to be an important means to verify the behavior of complex software systems. The different states of a system are monitored in the simulations and are compared against the requirements specification. So far, system states in natural language requirements cannot be automatically linked to signals from the simulation, and the manual mapping between requirements and simulation is a time-consuming task. Named-entity Recognition is a sub-task from the field of automated information retrieval and is used to classify parts of natural language texts into categories. In this paper, we use a self-trained Named-entity Recognition model with Bidirectional LSTMs and CNNs to extract states from requirements specifications. We present an almost entirely automated approach and an iterative semi-automated approach to train our model. The automated and the iterative approach are compared and discussed with respect to the usual manual extraction. We show that the manual extraction of states in 2,000 requirements takes nine hours. Our automated approach achieves an F1-score of 0.51 with 15 minutes of manual work, and the iterative approach achieves an F1-score of 0.62 with 100 minutes of work.
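
    Neither the authors' architecture details nor their training data are given in the abstract; the sketch below shows a generic BiLSTM token tagger in Keras for labeling system-state spans in requirements, with the character-level CNN omitted for brevity. Vocabulary size, tag set, layer sizes, and the toy training data are assumptions.

    ```python
    # Minimal sketch of a BiLSTM sequence tagger for marking system-state tokens
    # in requirements (O / B-STATE / I-STATE). All sizes and data are illustrative;
    # this is not the model described in the paper.
    import numpy as np
    from tensorflow.keras import layers, models

    VOCAB_SIZE = 5000   # assumed vocabulary size of the requirements corpus
    MAX_LEN = 40        # assumed maximum requirement length in tokens
    NUM_TAGS = 3        # O, B-STATE, I-STATE

    model = models.Sequential([
        layers.Embedding(VOCAB_SIZE, 64, mask_zero=True),
        layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
        layers.TimeDistributed(layers.Dense(NUM_TAGS, activation="softmax")),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    # Toy training data: padded token ids and one tag id per token.
    X = np.random.randint(1, VOCAB_SIZE, size=(32, MAX_LEN))
    y = np.random.randint(0, NUM_TAGS, size=(32, MAX_LEN))
    model.fit(X, y, epochs=1, verbose=0)
    print(model.predict(X[:1], verbose=0).shape)  # (1, MAX_LEN, NUM_TAGS)
    ```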

    Strategies and Best Practices for Model-based Systems Engineering Adoption in Embedded Systems Industry

    [Context] Model-based Systems Engineering (MBSE) advocates the integrated use of models throughout all development phases of a system development life-cycle. It is also often suggested as a solution to cope with the challenges of engineering complex systems. However, MBSE adoption is no trivial task, and companies, especially large ones, struggle to achieve it in a timely and effective way. [Goal] We aim to discover the best practices and strategies for implementing MBSE in companies that develop embedded software systems. [Method] Using an inductive-deductive research approach, we conducted 14 semi-structured interviews with experts from 10 companies. We then analyzed the data and drew conclusions, which were validated by an online questionnaire in a triangulation fashion. [Results] Our findings are summarized in an empirically validated list of 18 best practices for MBSE adoption and in a prioritized list of the 5 most important best practices. [Conclusions] Raising engineers' awareness of MBSE advantages and acquiring experience through small projects are considered the most important practices to increase the success of MBSE adoption.
    BMBF, 01IS15058, Verbundprojekt SPEDiT: Software Platform Embedded Systems - Dissemination und Transfe

    Supporting the Development of Cyber-Physical Systems with Natural Language Processing: A Report

    Software has become the driving force for innovation in any technical system that observes the environment with different sensors and influences it by controlling a number of actuators; such systems are nowadays called Cyber-Physical Systems (CPS). The development of such systems is inherently interdisciplinary and often involves a number of independent subsystems. Due to this diversity, the majority of development information is expressed in natural language artifacts of all kinds. In this paper, we report on recent results that our group has developed to support engineers of CPSs in working with the large amount of information expressed in natural language. We cover the topics of automatic knowledge extraction, expert systems, and automatic requirements classification. Furthermore, we envision that natural language processing will be a key component to connect requirements with simulation models and to explain tool-based decisions. We see both areas as promising for supporting engineers of CPSs in the future.

    Should I stay or should I go? On forces that drive and prevent MBSE adoption in the embedded systems industry

    [Context] Model-based Systems Engineering (MBSE) comprises a set of models and techniques that is often suggested as a solution to cope with the challenges of engineering complex systems. Although many practitioners agree with the arguments on the potential benefits of these techniques, companies struggle with the adoption of MBSE. [Goal] In this paper, we investigate the forces that prevent or impede the adoption of MBSE in companies that develop embedded software systems. We contrast the hindering forces with issues and challenges that drive these companies towards introducing MBSE. [Method] Our results are based on 20 interviews with experts from 10 companies. Through exploratory research, we analyze the results by means of thematic coding. [Results] Forces that prevent MBSE adoption mainly relate to immature tooling, uncertainty about the return on investment, and fears about migrating existing data and processes. On the other hand, MBSE adoption also has strong drivers, and participants have high expectations, mainly with respect to managing complexity, adhering to new regulations, and reducing costs. [Conclusions] We conclude that bad experiences and frustration about MBSE adoption originate from false or too high expectations. Nevertheless, companies should not underestimate the effort necessary to convince employees and address their anxiety.

    VerknĂĽpfung von Anforderungen und SystemausfĂĽhrungen mithilfe einer Multilevel Markup Language

    Software development in almost all domains is based on software and test specifications. The steadily growing complexity of software functions leads to growing requirements and test management efforts. This results in numerous challenges in ensuring that the executed tests satisfy all previously stated requirements. The use of natural language requirements allows the software to be specified without restrictions, but complicates automated processing in the development and test process, since the diversity of language can only be handled insufficiently by today's algorithms. Manual procedures therefore remain irreplaceable to this day. In this thesis, I present a testing approach that makes it possible to extract information from natural language requirements by means of text annotations and to process it further for testing. The test results are then displayed in the requirements, so that executed tests are directly linked to requirements. The first contribution of this thesis is the development of a markup language that enables users to selectively annotate text in requirements documents. The second contribution compares two approaches for developing an automated recognition of system states in natural language requirements. The resulting algorithm can support users in annotating text passages and is based on machine learning methods. As the third contribution, I present a comprehensive method for analyzing software behavior, in which the evaluations are based on the text passages annotated by the users. This enables the assessment of test results in the context of the requirements. The fourth contribution examines the usability of the developed markup language in an interview study with practitioners, who tested the approach and assessed it.

    Simulation methods and tools for collaborative embedded systems: with focus on the automotive smart ecosystems

    Embedded systems are increasingly equipped with open interfaces that enable communication and collaboration with other embedded systems. Collaborative embedded systems (CES) can be seen as an emerging class of systems that, although individually designed and developed, can form collaborations at runtime. When embedded systems collaborate with each other, functions developed independently need to be integrated in order to evaluate the resulting system and discover unwanted side effects. Traditionally, early-stage validation and verification (V&V) of systems composed of collaborative subsystems is performed by function integration at design time. Simulation is used at this stage to verify the system's behaviour in a predefined set of test scenarios. In this paper, we provide a survey of simulation methods and tools for the V&V of CES. In the context of a use case from the automotive domain (vehicle platooning), we present solutions (methods and tools) and challenges involved in evaluating vehicle collaboration using simulation.
    BMBF, 01IS16043R, Verbundprojekt CrESt: Modellbasierte Entwicklung kollaborativer eingebetteter System
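
    The abstract names vehicle platooning as the use case but gives no concrete setup; the following is a minimal, self-contained sketch of such a scenario, in which a follower vehicle regulates its speed to keep a target gap to the leader. The controller and all numeric values are illustrative assumptions, not taken from the surveyed tools.

    ```python
    # Hypothetical platooning sketch: a follower regulates its speed to hold a
    # target gap behind the leader using a simple PD-style controller. All values
    # (time step, gains, speeds) are illustrative, not taken from the paper.
    DT = 0.1            # simulation time step in seconds
    TARGET_GAP = 10.0   # desired inter-vehicle distance in meters
    KP, KD = 0.5, 1.0   # gains on gap error and relative speed

    leader_pos, leader_speed = 50.0, 20.0      # m, m/s
    follower_pos, follower_speed = 30.0, 20.0  # m, m/s

    for _ in range(300):                       # simulate 30 seconds
        leader_pos += leader_speed * DT
        gap = leader_pos - follower_pos
        accel = KP * (gap - TARGET_GAP) + KD * (leader_speed - follower_speed)
        follower_speed += accel * DT
        follower_pos += follower_speed * DT

    print(f"final gap: {leader_pos - follower_pos:.2f} m")  # settles near TARGET_GAP
    ```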